Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms in real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
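A minimal sketch of the oversampling ingredient: a fixed fraction of every training batch is drawn from the handful of demonstrations, no matter how large the interaction dataset grows. The buffer class, the 25% demo fraction, and the transition format below are illustrative assumptions, not the paper's exact implementation.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Replay buffer that oversamples a small set of demonstrations.

    A fixed fraction of every batch comes from the demonstration buffer
    regardless of how large the interaction buffer grows, so the handful
    of demos keeps influencing model and policy updates.
    """

    def __init__(self, demo_transitions, demo_fraction=0.25, capacity=1_000_000):
        self.demos = list(demo_transitions)          # (obs, action, reward, next_obs)
        self.interactions = deque(maxlen=capacity)
        self.demo_fraction = demo_fraction

    def add(self, transition):
        self.interactions.append(transition)

    def sample(self, batch_size):
        n_demo = max(1, int(self.demo_fraction * batch_size))
        batch = random.choices(self.demos, k=n_demo)
        if self.interactions:
            batch += random.choices(self.interactions, k=batch_size - n_demo)
        random.shuffle(batch)
        return batch
```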
Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, heterogeneous, and have complicated dependency structures, making analyses using conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we give a comprehensive survey on deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications while noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline, including multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. Under each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and open challenges. This survey will serve as a reference for biologists and computer scientists, encouraging collaborations.
Correspondence matching is a fundamental problem in computer vision and robotics applications, and the use of neural networks to solve it has recently been on the rise. Both rotation equivariance and scale equivariance are critical in correspondence matching applications. Classical correspondence matching methods are designed to withstand scaling and rotation transformations, but the features extracted with convolutional neural networks (CNNs) are only translation-equivariant to a certain extent. Recently, researchers have been working to improve the rotation equivariance of CNNs based on group theory. SIM(2) is the group of similarity transformations in the 2D plane. This paper presents a dedicated dataset for evaluating SIM(2)-equivariant correspondence algorithms. We compare the performance of 16 state-of-the-art (SOTA) correspondence matching methods. The experimental results demonstrate the importance of group-equivariant algorithms for correspondence matching under various SIM(2) transformation conditions. Since the sub-pixel accuracy achieved by CNN-based correspondence matching methods remains unsatisfactory, this particular field requires more attention in future work. Our dataset is publicly available at: mias.group/sim2e.
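For readers who want to reproduce the kind of perturbation such a benchmark evaluates, the sketch below warps an image by a random SIM(2) transformation (rotation, isotropic scale, translation) with OpenCV and returns the ground-truth matrix for checking whether matched keypoints map onto each other. The parameter ranges are assumptions, not the dataset's actual generation protocol.

```python
import cv2
import numpy as np

def random_sim2_pair(image, max_scale=1.5, max_translation=20, rng=None):
    """Warp an image by a random 2D similarity transform (SIM(2)).

    Returns the warped image and the 2x3 transform matrix, which serves as
    ground truth when evaluating correspondence matches between the pair.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    angle = rng.uniform(0.0, 360.0)                   # rotation in degrees
    scale = rng.uniform(1.0 / max_scale, max_scale)   # isotropic scale
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += rng.uniform(-max_translation, max_translation, size=2)
    warped = cv2.warpAffine(image, M, (w, h))
    return warped, M
```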
Tracking position and orientation independently affords more agile maneuvers for overactuated multirotor unmanned aerial vehicles (UAVs), but it also introduces an undesired downwash effect: the airflow generated by one thrust generator may counteract others due to their proximity, severely threatening the stability of the platform. The complexity of modeling aerodynamic airflow challenges algorithms that try to properly compensate for this side effect. Leveraging the input redundancy of overactuated UAVs, we address this issue with a novel control allocation framework that takes the downwash effect into account and explores the entire allocation space for an optimal solution. This optimal solution avoids the downwash effect while providing high thrust efficiency within the hardware constraints. To the best of our knowledge, ours is the first formal derivation to investigate the downwash effect on overactuated UAVs. We validate the framework on different hardware configurations in both simulation and experiment.
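The allocation idea can be illustrated with a generic constrained least-squares sketch: actuator commands are chosen to track a desired wrench within hardware bounds, while a penalty term stands in for the downwash-avoidance objective. The allocation matrix `B`, the bounds, and the `downwash_penalty` placeholder are assumptions; the paper derives its own formulation from an aerodynamic interaction model.

```python
import numpy as np
from scipy.optimize import minimize

def allocate(B, wrench_des, u_min, u_max, downwash_penalty):
    """Choose actuator commands u so that B @ u tracks the desired wrench.

    `downwash_penalty(u)` is a placeholder scalar cost that grows when the
    allocation steers airflow from one rotor into another; the actual
    framework derives this term from an aerodynamic interaction model.
    """
    def cost(u):
        tracking = np.sum((B @ u - wrench_des) ** 2)
        return tracking + downwash_penalty(u)

    u0 = np.clip(np.linalg.pinv(B) @ wrench_des, u_min, u_max)  # pseudo-inverse warm start
    result = minimize(cost, u0, bounds=list(zip(u_min, u_max)))
    return result.x
```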
Detecting beneficial feature interactions is essential in recommender systems, and existing approaches achieve this by examining all possible feature interactions. However, the cost of examining all possible higher-order feature interactions is prohibitive (it grows exponentially with the order). Hence, existing approaches only detect beneficial feature interactions of limited order (e.g., combinations of at most four features), which may miss beneficial interactions of orders above the limit. In this paper, we propose a hypergraph neural network model named HIRS. HIRS is the first work to directly generate beneficial feature interactions of arbitrary order and make recommendation predictions accordingly. The number of generated feature interactions can be specified to be much smaller than the number of all possible interactions, so our model admits a much lower running time. To achieve an effective algorithm, we exploit three properties of beneficial feature interactions and propose a Deep-Infomax-based method to guide the interaction generation. Our experimental results show that HIRS outperforms state-of-the-art algorithms in terms of recommendation accuracy.
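A hedged sketch of the hypergraph view of feature interactions: each interaction is a hyperedge over an arbitrary subset of features, encoded as a column of a binary incidence matrix and embedded by pooling the participating feature embeddings. The pooling rule and dimensions are illustrative assumptions, not the HIRS architecture.

```python
import torch
import torch.nn as nn

class HyperedgePooling(nn.Module):
    """Embed arbitrary-order feature interactions represented as hyperedges.

    `incidence` is a binary (n_features x n_interactions) matrix: column j
    marks which features participate in interaction j, so an interaction can
    involve any number of features rather than a fixed, limited order.
    """

    def __init__(self, n_features, dim=32):
        super().__init__()
        self.feature_emb = nn.Embedding(n_features, dim)

    def forward(self, incidence):
        feats = self.feature_emb.weight                        # (n_features, dim)
        sizes = incidence.sum(dim=0).clamp(min=1).unsqueeze(1)
        return (incidence.T @ feats) / sizes                   # mean-pool each hyperedge

# Example: two interactions over five features, of order 3 and 2 respectively.
incidence = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 0.]])
print(HyperedgePooling(n_features=5)(incidence).shape)         # torch.Size([2, 32])
```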
We design a cooperative planning framework that produces optimal trajectories for a tethered robot duo gathering scattered objects spread over a large area with a flexible net. Specifically, the proposed planning framework first produces a set of dense waypoints for each robot, which serve as the initialization for the optimization. Next, we formulate an iterative optimization scheme to generate smooth and collision-free trajectories while ensuring cooperation within the robot duo so that objects are collected efficiently and obstacles are properly avoided. We validate the generated trajectories in simulation using a model reference adaptive controller (MRAC) and implement them on physical robots to handle the unknown dynamics of carrying a payload. In a series of studies, we find that (i) a U-shaped cost function is effective for planning the cooperative robot duo, and (ii) task efficiency is not always proportional to the length of the tethered net. Given an environment configuration, our framework can estimate the optimal net length. To the best of our knowledge, ours is the first work to provide such an estimate for a tethered robot duo.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
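The input convex neural networks mentioned above can be sketched in a few lines of PyTorch: convexity of the output in the input is guaranteed by keeping the weights on the hidden-to-hidden path non-negative and using a convex, non-decreasing activation. Layer sizes and the in-forward weight clamping are illustrative choices, not the paper's exact parameterization of the Brenier map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input convex neural network: the scalar output is convex in the input x."""

    def __init__(self, dim_in, dim_hidden=64, n_layers=3):
        super().__init__()
        # Weights applied directly to x may take any sign.
        self.Wx = nn.ModuleList(
            [nn.Linear(dim_in, dim_hidden) for _ in range(n_layers)] +
            [nn.Linear(dim_in, 1)]
        )
        # Weights on the hidden path must stay non-negative to preserve convexity.
        self.Wz = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden, bias=False) for _ in range(n_layers - 1)] +
            [nn.Linear(dim_hidden, 1, bias=False)]
        )

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))  # softplus is convex and non-decreasing
        for i, (Wx, Wz) in enumerate(zip(self.Wx[1:], self.Wz)):
            pre = Wx(x) + F.linear(z, Wz.weight.clamp(min=0))
            z = pre if i == len(self.Wz) - 1 else F.softplus(pre)
        return z

x = torch.randn(8, 4)
print(ICNN(dim_in=4)(x).shape)  # torch.Size([8, 1])
```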
This paper illustrates the technologies behind user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG, an offline concept knowledge graph in the Life-Service domain that explicitly characterizes user intent by modeling the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
Participants in political discourse employ rhetorical strategies -- such as hedging, attributions, or denials -- to display varying degrees of belief commitments to claims proposed by themselves or others. Traditionally, political scientists have studied these epistemic phenomena through labor-intensive manual content analysis. We propose to help automate such work through epistemic stance prediction, drawn from research in computational semantics, to distinguish at the clausal level what is asserted, denied, or only ambivalently suggested by the author or other mentioned entities (belief holders). We first develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling. Then we demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books, where we characterize trends in cited belief holders -- respected allies and opposed bogeymen -- across U.S. political ideologies.
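One plausible way to pose such clause-level, multi-source stance prediction with an off-the-shelf RoBERTa classifier is sketched below: the belief holder and the clause are packed as a sentence pair and the model predicts a stance label. The label set, the input packing, and the checkpoint are assumptions; the classification head here is randomly initialized and would need fine-tuning on annotated stance data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["asserted", "denied", "ambivalent"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)

def predict_stance(belief_holder, clause):
    """Predict the epistemic stance a belief holder takes toward a clause."""
    # Pack the holder and clause as a sentence pair; RoBERTa inserts the
    # separator tokens internally.
    inputs = tokenizer(belief_holder, clause, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[logits.argmax(dim=-1).item()]

print(predict_stance("the senator", "the policy will certainly reduce costs"))
```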
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its superior expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have manifested their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL with transformers (transformer-based RL, or TRL) in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than earlier deep RL methods, but remain limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods instead treat RL as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework; they can extract policies from static datasets and fully exploit the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed, and prospects for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
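A minimal sketch of the trajectory-optimization view: each trajectory is flattened into an interleaved state-action token sequence and a causal transformer is trained with a behavior-cloning loss to predict the action at every state token. The architecture sizes, the continuous-action MSE loss, and the use of `nn.TransformerEncoder` are illustrative assumptions rather than any specific published model.

```python
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    """Joint state-action sequence model trained by behavior cloning."""

    def __init__(self, state_dim, action_dim, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, action_dim)

    def forward(self, states, actions):
        # states: (B, T, state_dim), actions: (B, T, action_dim)
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_state(states), self.embed_action(actions)], dim=2
        ).reshape(B, 2 * T, -1)                              # s_1, a_1, s_2, a_2, ...
        causal = torch.triu(torch.full((2 * T, 2 * T), float("-inf")), diagonal=1)
        h = self.encoder(tokens, mask=causal)                # causal self-attention
        return self.predict_action(h[:, 0::2])               # predict a_t from the s_t token

def bc_loss(model, states, actions):
    """Behavior-cloning loss over a batch of offline trajectories."""
    return nn.functional.mse_loss(model(states, actions), actions)
```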